SplitFed: When Federated Learning Meets Split Learning
Authors
Abstract
Federated learning (FL) and split learning (SL) are two popular distributed machine learning approaches. Both follow a model-to-data scenario; clients train and test machine learning models without sharing raw data. SL provides better model privacy than FL due to the model architecture split between the clients and the server. Moreover, the split model makes SL a better option for resource-constrained environments. However, SL performs slower than FL due to the relay-based training across multiple clients. In this regard, this paper presents a novel approach, named splitfed learning (SFL), that amalgamates the two approaches, eliminating their inherent drawbacks, along with a refined architectural configuration incorporating differential privacy and PixelDP to enhance data privacy and model robustness. Our analysis and empirical results demonstrate that (pure) SFL provides similar test accuracy and communication efficiency as SL, while significantly decreasing its computation time per global epoch for multiple clients. Furthermore, as in SL, its communication efficiency over FL improves with the number of clients. Besides, the performance of SFL with privacy and robustness measures is further evaluated under extended experimental settings.
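The core idea of the abstract can be illustrated with a toy sketch: each client computes only up to a "cut layer" and sends those activations (not raw data) to a server, clients run in parallel rather than in SL's sequential relay, and a federated step averages the client-side weights. The linear model, shapes, and single-round loop below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Minimal SFL sketch: a linear model split into a client-side part (W_c)
# and a server-side part (W_s). All shapes and data are toy assumptions.
rng = np.random.default_rng(0)
n_clients, d_in, d_cut, d_out = 3, 4, 2, 1

# Each client holds its own copy of the client-side weights.
client_weights = [rng.normal(size=(d_in, d_cut)) for _ in range(n_clients)]
W_s = rng.normal(size=(d_cut, d_out))  # server-side weights (shared)

def client_forward(W_c, x):
    # The client computes activations up to the cut layer ("smashed data")
    # and sends only these to the server, never the raw input x.
    return x @ W_c

def server_forward(a):
    # The server completes the forward pass from the cut layer onward.
    return a @ W_s

# One SFL round: all clients run in parallel (unlike vanilla SL's relay),
# then the client-side parts are synchronized by federated averaging.
x = rng.normal(size=(n_clients, d_in))
outputs = [server_forward(client_forward(W_c, xi))
           for W_c, xi in zip(client_weights, x)]

W_c_avg = np.mean(client_weights, axis=0)  # FedAvg over client-side weights
client_weights = [W_c_avg.copy() for _ in range(n_clients)]
print(np.allclose(client_weights[0], client_weights[1]))  # → True after sync
```

This captures why SFL keeps SL's privacy property (the server only sees cut-layer activations) while recovering FL-style parallelism and per-epoch speed.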
Similar resources
When Semi-supervised Learning Meets Ensemble Learning
Semi-supervised learning and ensemble learning are two important machine learning paradigms. The former attempts to achieve strong generalization by exploiting unlabeled data; the latter attempts to achieve strong generalization by using multiple learners. Although both paradigms have achieved great success during the past decade, they were almost developed separately. In this paper, we advocat...
Federated Multi-Task Learning
Federated learning poses new statistical and systems challenges in training machine learning models over distributed networks of devices. In this work, we show that multi-task learning is naturally suited to handle the statistical challenges of this setting, and propose a novel systems-aware optimization method, MOCHA, that is robust to practical systems issues. Our method and theor...
When Machine Learning Meets AI and Game Theory
We study the problem of development of intelligent machine learning applications to exploit the problems of adaptation that arise in multi-agent systems, for expected long-term profit maximization. We present two results. First, we propose a learning algorithm for the Iterated Prisoner's Dilemma (IPD) problem. Using numerical analysis we show that it performs strictly better than the tit-for-tat ...
When Lempel-Ziv-Welch Meets Machine Learning: A Case Study of Accelerating Machine Learning using Coding
In this paper we study the use of coding techniques to accelerate machine learning (ML). Coding techniques, such as prefix codes, have been extensively studied and used to accelerate low-level data processing primitives such as scans in a relational database system. However, there is little work on how to exploit them to accelerate ML algorithms. In fact, applying coding techniques for faster M...
Learning Meets Verification
In this paper, we give an overview on some algorithms for learning automata. Starting with Biermann’s and Angluin’s algorithms, we describe some of the extensions catering for specialized or richer classes of automata. Furthermore, we survey their recent application to verification problems.
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2022
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v36i8.20825